15 research outputs found

    Batch Reinforcement Learning on the Industrial Benchmark: First Experiences

    The Particle Swarm Optimization Policy (PSO-P) was recently introduced and has been shown to produce remarkable results on academic reinforcement learning benchmarks in an off-policy, batch-based setting. To further investigate its properties and its feasibility for real-world applications, this paper evaluates PSO-P on the so-called Industrial Benchmark (IB), a novel reinforcement learning (RL) benchmark that aims to be realistic by including a variety of aspects found in industrial applications, such as continuous state and action spaces, a high-dimensional, partially observable state space, delayed effects, and complex stochasticity. The experimental results of PSO-P on the IB are compared to those of closed-form control policies derived from the model-based Recurrent Control Neural Network (RCNN) and the model-free Neural Fitted Q-Iteration (NFQ). The experiments show that PSO-P is of interest not only for academic benchmarks but also for real-world industrial applications, since it yielded the best-performing policy in our IB setting. Compared to other well-established RL techniques, PSO-P produced outstanding results in terms of performance and robustness, while requiring relatively little effort for finding adequate parameters or making complex design decisions.
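
    A minimal sketch of the PSO-P idea described above, assuming a learned system model is available: at every control step a particle swarm searches over candidate action sequences, each sequence is scored by rolling it out on the model, and only the first action of the best sequence is applied. The function rollout_return and all constants are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the PSO-P idea, assuming a learned system model:
# at each control step, a particle swarm searches over candidate action
# sequences, each scored by rolling it out on the model, and only the
# first action of the best sequence is applied (receding horizon).
import numpy as np

def rollout_return(state, actions):
    # Placeholder for a learned dynamics/reward model (illustrative assumption,
    # not the authors' implementation): returns the predicted cumulative reward
    # of applying `actions` (horizon x action_dim) from `state`.
    return -np.sum((actions - 0.1 * state) ** 2)

def pso_p_action(state, horizon=10, action_dim=3, particles=30, iters=50,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0)):
    lo, hi = bounds
    x = np.random.uniform(lo, hi, (particles, horizon, action_dim))  # positions
    v = np.zeros_like(x)                                             # velocities
    p_best = x.copy()
    p_val = np.array([rollout_return(state, xi) for xi in x])
    g_best = p_best[np.argmax(p_val)].copy()
    for _ in range(iters):
        r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([rollout_return(state, xi) for xi in x])
        improved = val > p_val
        p_best[improved], p_val[improved] = x[improved], val[improved]
        g_best = p_best[np.argmax(p_val)].copy()
    return g_best[0]  # apply only the first action of the best sequence

print(pso_p_action(np.zeros(3)))
```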

    A Benchmark Environment Motivated by Industrial Control Problems

    In the research area of reinforcement learning (RL), novel and promising methods are frequently developed and introduced to the community. However, although many researchers are keen to apply their methods to real-world problems, implementing such methods in real industry environments is often a frustrating and tedious process. Academic research groups generally have only limited access to real industrial data and applications, so new methods are usually developed, evaluated, and compared using artificial software benchmarks. On the one hand, these benchmarks are designed to provide interpretable RL training scenarios and detailed insight into the learning process of the method at hand. On the other hand, they usually do not share much similarity with real-world industrial applications. For this reason, we used our industry experience to design a benchmark that bridges the gap between freely available, documented, and well-motivated artificial benchmarks and the properties of real industrial problems. The resulting Industrial Benchmark (IB) has been made publicly available to the RL community by publishing its Java and Python code, including an OpenAI Gym wrapper, on GitHub. In this paper, we motivate and describe the IB's dynamics in detail and identify prototypical experimental settings that capture common situations in real-world industrial control problems.
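
    Since the abstract mentions an OpenAI Gym wrapper, a minimal usage sketch follows. The class name IBGym, the observation size, and the stand-in dynamics are assumptions made purely for illustration; the actual wrapper published with the IB code may differ in naming and detail.

```python
# Minimal usage sketch of a Gym-style Industrial Benchmark environment.
# The class name `IBGym`, its observation size, and the stand-in dynamics are
# assumptions for illustration only; the actual wrapper published with the IB
# code on GitHub may differ in naming and detail.
import numpy as np

class IBGym:  # stand-in so the sketch runs without the real package
    def reset(self):
        return np.zeros(6)                       # observation placeholder
    def step(self, action):
        obs = np.zeros(6)
        reward = -float(np.sum(np.abs(action)))  # reward placeholder
        return obs, reward, False, {}

env = IBGym()
obs = env.reset()
total_reward = 0.0
for t in range(100):
    action = np.random.uniform(-1.0, 1.0, size=3)  # random baseline policy
    obs, reward, done, info = env.step(action)
    total_reward += reward
    if done:
        break
print("return of the random baseline:", total_reward)
```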

    Greed Is Good: Exploration and Exploitation Trade-offs in Bayesian Optimisation

    The performance of acquisition functions for Bayesian optimisation to locate the global optimum of continuous functions is investigated in terms of the Pareto front between exploration and exploitation. We show that Expected Improvement (EI) and the Upper Confidence Bound (UCB) always select solutions on this Pareto front for expensive evaluation, whereas Probability of Improvement is not guaranteed to do so and Weighted Expected Improvement does so only for a restricted range of weights. We introduce two novel ϵ-greedy acquisition functions. Extensive empirical evaluation of these, together with random search and purely exploratory and purely exploitative search, on 10 benchmark problems in 1 to 10 dimensions shows that ϵ-greedy algorithms are generally at least as effective as conventional acquisition functions (e.g. EI and UCB), particularly with a limited budget. In higher dimensions, ϵ-greedy approaches are shown to outperform conventional approaches. These results are borne out on a real-world computational fluid dynamics optimisation problem and a robotics active learning problem. Our analysis and experiments suggest that the most effective strategy, particularly in higher dimensions, is to be mostly greedy, occasionally selecting a random exploratory solution.
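
    A minimal sketch of the ϵ-greedy idea discussed above, assuming a minimisation objective and a Gaussian-process surrogate: with probability ϵ a random candidate is proposed (exploration), otherwise the candidate with the lowest posterior mean is proposed (exploitation). It illustrates the general trade-off only and does not reproduce the paper's acquisition functions or their Pareto-front selection.

```python
# Sketch of an epsilon-greedy acquisition step for Bayesian optimisation,
# assuming a minimisation objective and a Gaussian-process surrogate:
# with probability epsilon a random candidate is proposed (exploration),
# otherwise the candidate with the lowest posterior mean (exploitation).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def epsilon_greedy_propose(X, y, bounds, epsilon=0.1, n_candidates=2000, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    lo, hi = bounds
    candidates = rng.uniform(lo, hi, size=(n_candidates, X.shape[1]))
    if rng.random() < epsilon:
        return candidates[rng.integers(n_candidates)]   # exploratory proposal
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    mu = gp.predict(candidates)                         # posterior mean
    return candidates[np.argmin(mu)]                    # greedy proposal

# Toy usage on a 1-D quadratic objective (illustrative only).
f = lambda x: np.sum((x - 0.3) ** 2, axis=-1)
X = np.random.uniform(-1.0, 1.0, (5, 1))
y = f(X)
for _ in range(20):
    x_next = epsilon_greedy_propose(X, y, bounds=(-1.0, 1.0))
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next))
print("best observed value:", y.min())
```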

    Reinforcement Learning mit adaptiver Steuerung von Exploration und Exploitation (Reinforcement Learning with Adaptive Control of Exploration and Exploitation)

    Reinforcement learning makes it possible to learn intelligent behaviour from sensorimotor interaction. The learning paradigm is motivated by neurobiology and psychology: an artificial learning agent receives reward from its environment for the utility of the actions it takes. The resulting learning objective is to optimise the cumulative reward, which can be achieved through goal-directed action selection. Besides choosing actions that exploit the knowledge already acquired about the environment (exploitation), the agent must also choose actions that explore the dynamics of the environment (exploration). At the same time, it must not take too many exploratory actions, in order to avoid low reward from poor actions, but also not too few, so that the long-term effects of actions can be estimated as precisely as possible. This dissertation presents new exploration strategies for reinforcement learning in discrete action spaces. The goal is not to have the experimenter fix the agent's exploration rate globally, but to adapt it through meta-learning based on the learning progress. To this end, one approach uses the value differences that arise when the value function is re-estimated as a measure of the certainty about the effects of actions. A further approach employs stochastic neurons in order to control the exploration behaviour not only locally but also globally. The technical contributions of this work are also placed in the context of neurobiology, in which the following neurotransmitters play an important role: dopamine (TD error), acetylcholine (learning rate), and norepinephrine (exploration rate). Since the exploration behaviour is not prescribed explicitly by the experimenter but emerges from within the learning agent, the results of this work are an important step towards fully autonomous systems.
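
    A minimal sketch of the "value differences" approach described above: the per-state exploration rate is adapted from the magnitude of the value updates rather than being fixed by the experimenter. The toy MDP, all constants, and the exact mixing rule are illustrative assumptions, not the dissertation's implementation.

```python
# Sketch of value-difference-based exploration: the per-state exploration rate
# is adapted from the magnitude of the value updates instead of being fixed.
import numpy as np

def adapt_epsilon(eps, td_error, sigma=1.0, delta=0.1):
    # Large value changes signal remaining uncertainty -> raise exploration.
    f = 1.0 - np.exp(-abs(td_error) / sigma)
    return (1.0 - delta) * eps + delta * f

n_states, n_actions, alpha, gamma = 2, 2, 0.1, 0.95
Q = np.zeros((n_states, n_actions))
eps = np.ones(n_states)              # per-state exploration rate, initially 1
rng = np.random.default_rng(0)
s = 0
for step in range(5000):
    if rng.random() < eps[s]:
        a = int(rng.integers(n_actions))         # explore
    else:
        a = int(np.argmax(Q[s]))                 # exploit
    s_next = int(rng.integers(n_states))         # toy random transitions
    r = 1.0 if (s == 1 and a == 1) else 0.0      # toy reward
    td_error = r + gamma * np.max(Q[s_next]) - Q[s, a]
    Q[s, a] += alpha * td_error
    eps[s] = adapt_epsilon(eps[s], td_error)     # adapt exploration from value change
    s = s_next
print("Q:", Q, "epsilon per state:", eps)
```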